9 research outputs found

    On the continuous contract verification using blockchain and real-time data

    Supply chains today play a crucial role in the success of a company's logistics. In recent years, multiple investigations have focused on incorporating new technologies into supply chains, the Internet of Things (IoT) and blockchain being two of the most recent and popular technologies applied. However, their usage currently faces considerable challenges, such as transaction performance, scalability, and near real-time contract verification. In this paper we propose a model for the continuous verification of contracts in supply chains that uses the benefits of blockchain technology and real-time data acquisition from IoT devices for early decision-making. We propose two platform-independent optimization techniques (atomic transactions and grouped validation) that enhance the data transaction protocol and the data storage procedure, and a method for continuous verification of contracts, which allows corrective actions to be taken to reduce […]
    This work has been partially supported by the project “CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones” (S2018/TCS-4423) from the Madrid Regional Government and by the Spanish Ministry of Science and Innovation project “New Data Intensive Computing Methods for High-End and Edge Computing Platforms (DECIDE)”, Ref. PID2019-107858GB-I00.
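    The grouped-validation idea can be illustrated with a minimal sketch: instead of validating each IoT reading as a separate blockchain transaction, readings are checked in batches against a contract clause, and a single digest would represent the whole batch on-chain. All names here (`Contract`, `validate_group`, the temperature limit) are illustrative assumptions, not the paper's actual API.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Contract:
    max_temperature: float  # e.g., a cold-chain limit in Celsius

def validate_group(contract, readings):
    """Validate a batch of sensor readings; return (ok, digest, violations)."""
    violations = [r for r in readings if r > contract.max_temperature]
    payload = ",".join(f"{r:.2f}" for r in readings).encode()
    digest = hashlib.sha256(payload).hexdigest()  # one hash for the batch
    return (not violations, digest, violations)

contract = Contract(max_temperature=8.0)
ok, digest, violations = validate_group(contract, [4.1, 5.0, 9.3, 6.2])
# 9.3 exceeds the 8.0 limit, so the group fails and a corrective
# action could be triggered early, before the shipment is lost
```

    Committing one digest per batch instead of one transaction per reading is what would relieve the transaction-performance and scalability pressure the abstract mentions.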

    A federated content distribution system to build health data synchronization services

    In organizational environments, such as hospitals, data have to be processed, preserved, and shared with other organizations in a cost-efficient manner. Moreover, organizations have to fulfill different mandatory non-functional requirements imposed by the laws, protocols, and norms of each country. In this context, this paper presents a Federated Content Distribution System to build infrastructure-agnostic health data synchronization services. In this federation, each hospital manages local and federated services based on a pub/sub model. The local services manage users and contents (i.e., medical imagery) inside the hospital, whereas the federated services allow different hospitals to cooperate by sharing resources and data. Data preparation schemes were implemented to add non-functional requirements to data. Moreover, data published in the content distribution system are automatically synchronized to all users subscribed to the catalog where the content was published.
    This work has been partially supported by the grant “CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones” (Ref: S2018/TCS-4423) of the Madrid Regional Government; by the Spanish Ministry of Science and Innovation project “New Data Intensive Computing Methods for High-End and Edge Computing Platforms (DECIDE)”, Ref. PID2019-107858GB-I00; and by project 41756 “Plataforma tecnológica para la gestión, aseguramiento, intercambio y preservación de grandes volúmenes de datos en salud y construcción de un repositorio nacional de servicios de análisis de datos de salud” of FORDECYT-PRONACES.
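    The pub/sub synchronization described above can be sketched minimally: publishing into a catalog pushes the content to every subscriber. The class and method names below are assumptions for illustration, not the system's actual interface.

```python
class Catalog:
    """A named catalog in the pub/sub model: publish fans out to subscribers."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []          # one callback per subscribed user

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, item):
        for cb in self.subscribers:    # automatic synchronization on publish
            cb(item)

radiology = Catalog("radiology")
hospital_a, hospital_b = [], []
radiology.subscribe(hospital_a.append)
radiology.subscribe(hospital_b.append)
radiology.publish("scan-001.dcm")
# both subscribers now hold a synchronized copy of "scan-001.dcm"
```

    In the federation, the same mechanism would span hospitals: a federated service subscribes on behalf of a remote site, so a publish at one hospital propagates to its partners.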

    Improving performance and capacity utilization in cloud storage for content delivery and sharing services

    Content delivery and sharing (CDS) is a popular and cost-effective cloud-based service for organizations to deliver contents to, and share them with, end-users, partners, and insider users. This type of service improves data availability and I/O performance by producing and distributing replicas of shared contents. However, such a technique increases storage and network resource utilization. This paper introduces a threefold methodology to improve the trade-off between I/O performance and capacity utilization of cloud storage for CDS services. This methodology includes: i) the definition of a classification model for identifying types of users and contents by analyzing their consumption/demand and sharing patterns; ii) the usage of the classification model for defining content availability and load-balancing schemes; and iii) the integration of a dynamic availability scheme into a cloud-based CDS system. Our method was implemented […]
    This work was partially supported by the Spanish Ministry of Economy, Industry and Competitiveness under the grant TIN2016-79637-P “Towards Unification of HPC and Big Data Paradigms”.
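    One way to picture the classification-to-availability step is a demand-based replica plan: heavily requested contents get more replicas (favoring I/O performance), cold contents get one (favoring capacity utilization). The class labels, thresholds, and replica counts below are assumptions made for illustration only.

```python
def classify(downloads_per_day):
    """Classify a content by its observed demand pattern."""
    if downloads_per_day >= 100:
        return "hot"       # many replicas: favor I/O performance
    if downloads_per_day >= 10:
        return "warm"
    return "cold"          # single replica: favor capacity utilization

REPLICAS = {"hot": 4, "warm": 2, "cold": 1}

def replica_plan(catalog):
    """Map each content id to a replica count based on its demand class."""
    return {cid: REPLICAS[classify(d)] for cid, d in catalog.items()}

plan = replica_plan({"video.mp4": 250, "report.pdf": 12, "backup.tar": 1})
# hot content gets 4 replicas, warm gets 2, cold stays at 1
```

    A dynamic scheme would re-run this classification as demand shifts, which is the trade-off management the paper's third step integrates into the CDS system.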

    A policy-based containerized filter for secure information sharing in organizational environments

    In organizational environments, sensitive information is unintentionally exposed and sent to the cloud without encryption by insiders, even ones who were previously informed about cloud risks. To mitigate the effects of this information privacy paradox, we propose the design, development, and implementation of SecFilter, a security filter that enables organizations to implement security policies for information sharing. SecFilter automatically performs the following tasks: (a) intercepts files before sending them to the cloud; (b) searches for sensitive criteria in the context and content of the intercepted files by using mining techniques; (c) calculates the risk level for each identified criterion; (d) assigns a security level to each file based on the detected risk in its content and context; and (e) encrypts each file by using a multi-level security engine, based on digital envelopes built from symmetric encryption, attribute-based encryption, and digital signatures, to guarantee the security services of confidentiality, integrity, and authentication on each file, while access control mechanisms are enforced before sending the secured file versions to cloud storage. A prototype of SecFilter was implemented for a real-world file sharing application that has been deployed on a private cloud. The fine-tuning of SecFilter components is described, and a case study has been conducted based on document sharing from a well-known repository (the MedLine corpus). The experimental evaluation revealed the feasibility and efficiency of applying a security filter to share information in organizational environments.
    This work has been partially supported by the Spanish “Ministerio de Economia y Competitividad” under the project grant TIN2016-79637-P “Towards Unification of HPC and Big Data paradigms”.
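    Steps (b)-(d) of the pipeline above can be sketched as a toy risk scorer: scan file text for sensitive criteria, sum their weights, and map the score to a security level that would select the encryption strength. The criteria list, weights, and thresholds are assumptions, not SecFilter's actual policy.

```python
SENSITIVE = {"diagnosis": 3, "patient": 2, "ssn": 5}  # criterion -> risk weight

def risk_score(text):
    """Sum the weights of every sensitive criterion found in the text."""
    words = text.lower().split()
    return sum(w for term, w in SENSITIVE.items() if term in words)

def security_level(score):
    """Map a risk score to the security level that picks the envelope."""
    if score >= 5:
        return "high"      # e.g., attribute-based encryption + signature
    if score >= 2:
        return "medium"    # e.g., symmetric encryption only
    return "low"           # shared as-is

level = security_level(risk_score("patient record with diagnosis attached"))
# "patient" (2) + "diagnosis" (3) = 5, so this file is classed "high"
```

    In the real system the mining step is richer than keyword matching, but the shape — criteria found, per-criterion risk, aggregated security level — follows the abstract's (b)-(d).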

    A containerized service for clustering and categorization of weather records in the cloud

    This paper presents a containerized service for clustering and categorization of weather records in the cloud. This service considers a scheme of microservices and containers for organizations and end-users to manage and process weather records from acquisition, through the preprocessing and processing stages, to the exhibition of results. In this service, a specialized crawler acquires records that are delivered to a microservice for distributed categorization of weather records, which clusters the acquired data (temperature and precipitation) by spatiotemporal parameters. The clusters found are displayed on a map by a geoportal, where a statistics microservice also produces regression graphs on the fly. To evaluate the feasibility of this service, a case study based on 33 years of daily records captured by the Mexican weather station network (EMAS-CONAGUA) has been conducted. Lessons learned in this study about the performance of record acquisition, clustering processing, and map exhibition are described in this paper. Examples of the utilization of this service revealed that end-users can analyze weather parameters in an efficient, flexible, and automatic manner.
    This work was partially supported by the sectoral fund for research, technological development, and innovation in space activities of the Mexican National Council of Science and Technology (CONACYT) and the Mexican Space Agency (AEM), project No. 262891.
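    A minimal sketch of spatiotemporal grouping, assuming records are bucketed by a coarse latitude/longitude grid cell and month before per-cluster statistics are computed; the service's actual clustering is more elaborate, so this only illustrates the idea of clustering by spatiotemporal parameters.

```python
from collections import defaultdict

def cluster(records, cell_deg=1.0):
    """Group (lat, lon, month, temp) records into spatiotemporal buckets
    and return the mean temperature per bucket."""
    clusters = defaultdict(list)
    for lat, lon, month, temp in records:
        key = (int(lat // cell_deg), int(lon // cell_deg), month)
        clusters[key].append(temp)
    return {k: sum(v) / len(v) for k, v in clusters.items()}

means = cluster([
    (19.4, -99.1, 5, 24.0),   # same 1-degree cell, same month
    (19.6, -99.3, 5, 26.0),
    (25.7, -100.3, 5, 31.0),  # different cell
])
# the first two records fall in one bucket (mean 25.0), the third in another
```

    Each resulting bucket corresponds to what the geoportal would render as one cluster on the map, with the statistics microservice computing its summary on the fly.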

    A gearbox model for processing large volumes of data by using pipeline systems encapsulated into virtual containers

    Software pipelines enable organizations to chain applications to add value to contents (e.g., confidentiality, reliability, and integrity) before either sharing them with partners or sending them to the cloud. However, the pipeline components add overhead when processing large volumes of data, which can become critical in real-world scenarios. This paper presents a gearbox model for processing large volumes of data by using pipeline systems encapsulated into virtual containers. In this model, the gears represent applications, whereas gearboxes represent software pipelines. This model was implemented as a collaborative system that automatically performs gear up (by using parallel patterns) and/or gear down (by using in-memory storage) until all gears produce uniform data processing velocities. This model reduces delays and bottlenecks produced by the heterogeneous performance of the applications included in software pipelines. The new container tool has been designed to encapsulate both the collaborative system and the software pipelines into a virtual container and deploy it on IT infrastructures. We conducted case studies to evaluate the performance when processing medical images and PDF repositories. The incorporation of a capsule into a cloud storage service for pre-processing medical imagery was also studied. The experimental evaluation revealed the feasibility of applying the gearbox model to the deployment of software pipelines in real-world scenarios, as it can significantly improve the end-user service experience when pre-processing large-scale data in comparison with state-of-the-art solutions such as Sacbe and Parsl.
    This work has been partially supported by the Spanish “Ministerio de Economia y Competitividad” under the project grant TIN2016-79637-P “Towards Unification of HPC and Big Data paradigms”.
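    The gear-up decision can be sketched as a throughput-balancing calculation: each slow stage is assigned enough parallel workers ("gears") to keep pace with a target pipeline velocity. The stage names and throughput figures below are invented for illustration; the actual system also gears down with in-memory buffering, which this sketch omits.

```python
def gear(stages, target_mbps):
    """Return how many worker replicas each stage needs so that every
    stage can sustain the target throughput (ceiling division)."""
    plan = {}
    for name, throughput in stages.items():
        workers = -(-target_mbps // throughput)  # ceil(target / throughput)
        plan[name] = int(workers)
    return plan

# encryption is the slow gear; it gets three replicas to keep pace
plan = gear({"compress": 120, "encrypt": 40, "upload": 100}, target_mbps=120)
# -> one compressor, three encryptors, two uploaders
```

    Once every stage sustains the same velocity, the queue between gears stops growing, which is the "uniform data processing velocities" condition the abstract describes.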

    On the continuous processing of health data in edge-fog-cloud computing by using micro/nanoservice composition

    The edge, the fog, the cloud, and even end-users' devices play a key role in the management of the lifecycle of sensitive health content/data. However, the creation and management of solutions that include multiple applications, executed by multiple users in multiple environments (the edge, the fog, and the cloud), to process multiple health repositories while, at the same time, fulfilling non-functional requirements (NFRs) represents a complex challenge for health care organizations. This paper presents the design, development, and implementation of an architectural model to create, on demand, edge-fog-cloud processing structures to continuously handle big health data and, at the same time, to execute services for fulfilling NFRs. In this model, constructive and modular blocks, implemented as microservices and nanoservices, are recursively interconnected to create edge-fog-cloud processing structures as […]
    This work was supported in part by the Council for Science and Technology of Mexico (CONACYT) through Basic Scientific Research Grant 2016-01-285276, and in part by the project “CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones” from the Madrid Regional Government under Grant S2018/TCS-4423.
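    The recursive interconnection of blocks can be sketched as function composition: a processing structure is either a leaf service or a chain of sub-structures, so an edge stage, a fog stage, and a cloud stage compose the same way single services do. All names and the toy workloads here are assumptions, not the paper's components.

```python
def leaf(fn):
    """A nanoservice: the smallest constructive block."""
    return fn

def chain(*blocks):
    """Compose blocks so the output of one feeds the next; the result is
    itself a block, which is what makes the composition recursive."""
    def run(data):
        for block in blocks:
            data = block(data)
        return data
    return run

edge = chain(leaf(lambda d: [x for x in d if x is not None]))  # filter raw data
fog = chain(leaf(lambda d: sorted(d)))                         # order records
cloud = chain(leaf(lambda d: sum(d) / len(d)))                 # aggregate
pipeline = chain(edge, fog, cloud)  # structures compose like single blocks
result = pipeline([3, None, 1, 2])  # filtered, sorted, then averaged
```

    Because `chain` returns something with the same shape as a `leaf`, structures nest to any depth, which mirrors the "recursively interconnected" blocks in the model.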

    Prevalence of Barrett's esophagus: An observational study from a gastroenterology clinic

    No full text
    Introduction and aims: Barrett's esophagus is a condition that predisposes to esophageal adenocarcinoma. Our aim was to establish the prevalence of Barrett's esophagus at our center, as well as to determine its associated factors. Materials and methods: We retrospectively assessed the endoscopic reports of 500 outpatients seen at our Gastroenterology Service from November 2014 to April 2016. We determined the prevalence of Barrett's esophagus and analyzed the demographic, clinical, and endoscopic findings associated with that pathology. Results: The prevalence of Barrett's esophagus was 1.8%. The mean age of the patients with Barrett's esophagus was 58.7 years (range: 45-70) and there was a predominance of men (66%). In the subgroup of patients with symptoms of gastroesophageal reflux (n = 125), the prevalence of Barrett's esophagus was 7.2%. In the multivariate analysis, the factors that were independently associated with Barrett's esophagus were gastroesophageal reflux (P = .005) and hiatal hernia (P = .006). Conclusions: The overall prevalence of Barrett's esophagus was 1.8% in our population, with a prevalence of 7.2% in patients that had symptoms of gastroesophageal reflux. Keywords: Intestinal metaplasia, Adenocarcinoma, Gastroesophageal reflux, Hiatal hernia, Endoscopy
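    The two reported prevalences follow from simple proportions; as a check, 1.8% of 500 endoscopies and 7.2% of the 125 reflux patients both correspond to 9 cases (the absolute counts are inferred from the percentages, since the abstract does not state them).

```python
def prevalence(cases, total):
    """Prevalence as a percentage, rounded to one decimal place."""
    return round(100 * cases / total, 1)

overall = prevalence(9, 500)  # 9 of 500 endoscopies  -> 1.8%
reflux = prevalence(9, 125)   # 9 of 125 reflux patients -> 7.2%
```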

    From the edge to the cloud: A continuous delivery and preparation model for processing big IoT data

    No full text
    This research was partially supported by the “Fondo Sectorial de Investigación para la Educación”, SEP-CONACyT Mexico, under grant numbers 281565 and 285276, and by the Madrid Regional Government (Spain) under the grant “Convergencia Big data-Hpc: de los sensores a las Aplicaciones (CABAHLA-CM)”, ref: S2018/TCS-4423.